Search Results: "sjr"

30 April 2008

Simon Richter: Applicability (i.e. the real world)

Teaching dammit to link against Boost is proving rather tricky, because Boost already has a scheme for encoding ABI information into file names (in fact, multiple schemes, selectable at build time). It might make sense to allow sites (not packages!) to provide scripts in a Turing-complete language for scanning for other packages. I shall call that "legacy support".

23 April 2008

Simon Richter: EPIC FAIL

Commands that should not be used on a LUKS partition:
  1. pvcreate

28 March 2008

Simon Richter: "dammit" preview

I've just uploaded a small "preview" of dammit. It doesn't do too much yet, and falls over more often than not, but it shows the general direction the tool is going to take. Available from the dump or attached to this blog entry.

27 March 2008

Simon Richter: Pump Up The Volume

Filip, I like my music loud, that's one of the main reasons I go to clubs in the first place. I also find it acceptable that people in the clubs I go to smoke, even if I don't. Coolness of earplugs is achieved by buying them in military shops, by the way.

19 March 2008

Simon Richter: Scary thought

I just switched to the Windows 2008 Server SDK (from 2003). So far, no problems. I can almost say I like it.

16 March 2008

Simon Richter: What is Debian's role?

I must admit that I feel slightly uneasy about this, because it blurs the distinction between maintaining a package and forking it even more than it is already blurred. Some improvements have long been necessary: we really need to handle more archive formats, or the situation where upstream ships five tarballs and two patches. However, I think the quilt support is a step in the wrong direction. There is nothing wrong with quickly fixing a problem in a package so Debian users do not have to wait for the fix to be incorporated into an upstream release; but for that we don't need a sophisticated system to keep patches separate and forward-port them to new upstream releases. Let's face it: the quilt integration into the build process exists because we expect to maintain these patches for months.

On the other hand, there is an effort to synchronize these changes between distributions. The question to me is: if all the distributions can agree that a certain change has to be made, why can this change not be committed upstream? Is Debian failing to submit patches, or are our patches just unacceptable because they break every configuration except ours? I can absolutely see the need for a tool to assist in forks, ideally one that allows us to submit these patches to upstream developers easily, so they can pick them up with a minimum of hassle. Now why does this entire discussion sound so incredibly familiar?

12 March 2008

Simon Richter: Must... not... scream

Windows knows about uint32_t. You are cordially invited to guess whether the same is true for uint16_t. Why they have chosen to ship debug info only for the non-debug version of their C and C++ libraries is also slightly beyond me.

5 March 2008

Simon Richter: Input formatting with iostreams

Enrico, you were probably looking for std::cout << (std::stringstream() << "foo" << std::setw(8) << "bar").rdbuf() << std::endl; which is a WTF in itself.

3 March 2008

Simon Richter: To patch or not to patch?

Two or three days ago, there was a discussion about dpatch, quilt and all the other patch management systems on #debian-devel. My opinion still stands: if you change the upstream source, it had better be in a way that upstream finds acceptable. But if I expect my changes to be merged quickly, why do I need an elaborate scheme to manage them? Or is a significant percentage of our upstreams unresponsive enough to warrant a fork?

25 February 2008

Simon Richter: On the road

The next two days I'm going to be at the "embedded world", presenting all the stuff we developed over the last year. There are still four hours to go until we're leaving, so I'm probably going to get a bit of sleep; it will be needed, as the next opportunity will be in about 22 hours.

21 February 2008

Simon Richter: Lazyweb: not hogging the event loop

I'm (again) writing a library that is supposed to be cross-platform, but needs sort of an event loop, which I'd rather leave in the main program, just handing it a bunch of file descriptors to watch and a timeout. Problem is, file descriptors are platform-dependent. There are a few approaches to this. The mighty ASIO simply annexes the entire loop and tells people to hook into it if they need anything -- it works, but on the downside you get threads for free when you didn't want or need them. Simply spawning a new thread for everything isn't a solution either, as it requires me to do lots of synchronised inter-thread communication -- and on Windows, file handles are thread-affine, so I cannot even call into the same library from different threads. I will probably go with ASIO for this project, but I'm fairly certain that there are better solutions out there.

15 February 2008

Simon Richter: More building

I've gone over the design for "dammit" again, and I think I can even get away with non-specialised nodes for the various compilation tasks in the work tree, which will make support for additional input languages pluggable -- an idea I had tossed out in the beginning for fear that it might not be doable. So I finally get to solve one thing I found annoying about automake. :-) The other big feature I'm excited about is having two different modes of importing stuff from libraries: with and without reexporting the ABI. Depending on that, any newly built library will incorporate the SONAME of the dependent library into its own, and its development package will depend on the other -dev package (obviously, for that to work, I'll have to add package building as well :-P )

Simon Richter: erm

12 February 2008

Simon Richter: It's cold, we need to burn some CPU power

I've been thinking about building a file system that uses cryptography to lock out unauthorised processes. In essence, user programs would hold a set of keys, and the file system would ask them to sign their changes, or to decrypt a block before interpreting its contents for the application. That would considerably raise the bar for successful attacks: to write to a system directory, you would first have to subvert a process with the appropriate privileges and make it do the work for you.

1 February 2008

Simon Richter: Guards, guards!

Another design decision for "dammit" is how to handle dependency loops between header files -- these may be intentional, but most often are not, and I'd like to find them because they often lead to confusing error messages.

31 January 2008

Simon Richter: I can't get no sleep

Not being able to sleep sucks. It is now 6 AM, and if I wanted to get up as planned, I have less than two hours. Since I can't concentrate at all with that little sleep, I guess I will make it at least four hours. Still, tomorrow will suck, especially as I will be dealing with crypto code.

29 January 2008

Simon Richter: Milestone 0.9

The not-yet-so-mighty project build tool I'm writing (codenamed "dammit") just compiled itself for the first time (took 40 seconds :-P ). It is probably not that usable yet, as it lacks dependency checking and thus always recompiles everything -- fixing that is the plan for tonight. So far, all it does is compile everything it can find and recognise in a directory and link it to a program; the plan for the future is not to add too much to that, mainly new project types like libraries and new input languages.

28 January 2008

Simon Richter: Building software II: now with more evil

As written earlier, I'm switching build systems for one of my projects. In the end, I decided to roll my own. Yes, another build system out there. The general idea is to combine the strengths of automake and MSBuild, and take them to new extremes by not compromising where they have (which makes my build system a tad inflexible, though).

Automake's biggest strength is that it is descriptive rather than imperative; while it allows rules to be entered directly for some flexibility, this is usually the point where the generated Makefiles become unportable. But in general, the idea of having a tool deduce what exactly should be done to transform a set of input files into a set of output files is a good one, I think.

From MSBuild, I stole the idea of making an explicit distinction between higher-level integration projects ("solutions" in their lingo) and actual build projects. Build projects have only one (main) output, usually a program or a library, plus several auxiliary outputs (development files, translation templates, ...), while integration projects coordinate the build process (for example, if one of the build projects builds a tool, all projects using that tool need to be re-run if the tool is rebuilt, but only then).

Automake's biggest downfall is stuff that is not explicitly supported. One of my projects uses IDL extensively, which leads to quite a lot of explicit rules supporting different ORBs. If I wanted to teach automake about that, I'd have to understand lots of perl code that emits code fragments that, when combined, will do the right thing. In a way, automake is a compiler, but based around the idea that since we start out from a Makefile-compatible syntax, just adding things will magically work. Where it doesn't, special cases are invented, such as for libtool support, but stacking special cases is really difficult.

MSBuild, on the other hand, leaves a lot to be desired when it comes to project-to-project integration. There is no way to tell it "publish these include files under such-and-such a directory for use by other projects"; rather, you are expected to have the project using them explicitly reference your project in its include directories. If a project uses includes that reference includes from yet another project, it gets worse.

Milestone 1 is going to be when the main binary can build itself on Windows and Linux.

23 January 2008

Simon Richter: Building software

Automake seems to support Microsoft's C++ compiler, "cl.exe", to some extent when building simple applications; what it doesn't handle yet are static libraries (dynamic ones are handled by libtool anyway). I wonder whether it would be easier to add the necessary scripting, or whether I should write wrapper programs. Or whether I should switch build systems, again. The code itself is actually rather portable, except for some low-level functionality and some of the interface code to higher levels, which can talk to my code either via sockets (anywhere) or using COM (on Windows). So a good build system would work on many platforms, but allow conditional compilation of objects on some.

10 January 2008

Simon Richter: No technology bitching today

It is a sunny day today. People are friendlier than usual. Life is good.
